Service function chain deployment optimization method based on node comprehensive importance ranking
Haiyan HU, Qiaoyan KANG, Shuo ZHAO, Jianfeng WANG, Youbin FU
Journal of Computer Applications    2023, 43 (3): 860-868.   DOI: 10.11772/j.issn.1001-9081.2022020257

To meet the high-reliability and low-latency requirements of the 5G network environment while reducing network bandwidth consumption, a Service Function Chain (SFC) deployment method based on node comprehensive importance ranking was proposed for traffic and reliability optimization. Firstly, Virtualized Network Functions (VNFs) were aggregated based on the rate of traffic change, which reduced the number of deployed physical nodes and improved link reliability. Secondly, node comprehensive importance was defined by the degree, reliability, comprehensive delay and link hop count of a node, and the physical nodes were ranked accordingly. Then, the VNFs were mapped to the underlying physical nodes in turn; at the same time, by restricting the number of links, the “ping-pong effect” was reduced and traffic was optimized. Finally, the virtual links were mapped through the k-shortest path algorithm to complete the deployment of the entire SFC. Compared with the original aggregation method, the proposed method improves SFC reliability by 2%, reduces the end-to-end delay of the SFC by 22%, reduces bandwidth overhead by 29%, and increases the average long-term revenue-to-cost ratio by 16%. Experimental results show that the proposed method can effectively improve link reliability, reduce end-to-end delay and bandwidth resource consumption, and achieve a good optimization effect.
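The ranking step can be made concrete with a short sketch. Below is a minimal Python example, assuming a networkx graph whose nodes carry precomputed reliability and delay attributes; the weights and the scoring formula are illustrative assumptions, not the paper's exact definition.

```python
import itertools
import networkx as nx

def comprehensive_importance(g: nx.Graph, weights=(0.4, 0.3, 0.2, 0.1)):
    """Rank physical nodes by a composite score of degree, reliability,
    delay and average hop count (higher score = mapped earlier)."""
    w_deg, w_rel, w_del, w_hop = weights
    max_deg = max(dict(g.degree).values())
    score = {}
    for n, data in g.nodes(data=True):
        # Average hop count from this node to every other node.
        avg_hops = sum(nx.shortest_path_length(g, n).values()) / (len(g) - 1)
        score[n] = (w_deg * g.degree[n] / max_deg
                    + w_rel * data["reliability"]
                    - w_del * data["delay"]
                    - w_hop * avg_hops)
    return sorted(score, key=score.get, reverse=True)

# VNFs are mapped onto the top-ranked nodes in turn; each virtual link is
# then routed with a k-shortest-path search, for example:
#   paths = list(itertools.islice(nx.shortest_simple_paths(g, u, v), k))
```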

Chinese named entity recognition based on knowledge base entity enhanced BERT model
Jie HU, Yan HU, Mengchi LIU, Yan ZHANG
Journal of Computer Applications    2022, 42 (9): 2680-2685.   DOI: 10.11772/j.issn.1001-9081.2021071209

Aiming at the problem that the pre-training model BERT (Bidirectional Encoder Representation from Transformers) lacks vocabulary information, a Chinese named entity recognition model called OpenKG + Entity Enhanced BERT + CRF (Conditional Random Field), based on a knowledge base entity enhanced BERT model, was proposed on the basis of the semi-supervised entity enhanced minimum mean-square error pre-training model. Firstly, documents were downloaded from the Chinese general encyclopedia knowledge base CN-DBPedia, and entities were extracted with the Jieba Chinese text segmenter to expand the entity dictionary. Then, the entities in the dictionary were embedded into BERT for pre-training, and the word vectors obtained from the training were input into a Bidirectional Long Short-Term Memory network (BiLSTM) for feature extraction. Finally, the results were corrected by the CRF and output. Model validation was performed on the CLUENER 2020 and MSRA datasets, and the proposed model was compared with the Entity Enhanced BERT pre-training, BERT+BiLSTM, ERNIE and BiLSTM+CRF models. Experimental results show that, compared with these four models, the proposed model has its F1 score increased by 1.63 and 1.1 percentage points, 3.93 and 5.35 percentage points, 2.42 and 4.63 percentage points, and 6.79 and 7.55 percentage points on the two datasets, respectively. It can be seen that the proposed model effectively improves the overall performance of named entity recognition, with F1 scores better than those of all comparison models.
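The downstream tagging pipeline can be sketched in a few lines. The following is a minimal PyTorch sketch of the BERT + BiLSTM + CRF architecture with the entity-enhanced pre-training omitted; the checkpoint name, hidden size and the pytorch-crf dependency are illustrative assumptions.

```python
import torch
from torch import nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags: int, hidden: int = 256):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        feats, _ = self.lstm(emb)          # BiLSTM feature extraction
        emissions = self.fc(feats)         # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:               # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best paths
```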

Crowd counting network based on multi-scale spatial attention feature fusion
DU Peide, YAN Hua
Journal of Computer Applications    2021, 41 (2): 537-543.   DOI: 10.11772/j.issn.1001-9081.2020060793
Concerning the poor performance of crowd counting in scenes of different density caused by severe scale changes and occlusions, a new Multi-scale spatial Attention Feature fusion Network (MAFNet) was proposed, based on the Congested Scene Recognition Network (CSRNet), by combining a multi-scale feature fusion structure with a spatial attention module. Before feature extraction with MAFNet, the scene images with head markers were processed with a Gaussian filter to obtain the ground-truth density maps. In addition, a method of jointly using two basic loss functions was proposed to constrain the consistency between the estimated density map and the ground-truth density map. Next, with the multi-scale feature fusion structure as the backbone of MAFNet, the strategy of simultaneously extracting and fusing multi-scale features was used to obtain the multi-scale fusion feature map, and the feature maps were then calibrated and refined by the spatial attention module. After that, an estimated density map was generated through dilated convolution, and the number of people in the scene was obtained by integrating the estimated density map pixel by pixel. To verify the effectiveness of the proposed model, evaluations were conducted on four datasets (ShanghaiTech, UCF_CC_50, UCF-QNRF and WorldExpo'10). Experimental results on Part B of the ShanghaiTech dataset show that, compared with CSRNet, MAFNet reduces the Mean Absolute Error (MAE) by 34.9% and the Mean Square Error (MSE) by 29.4%. Furthermore, experimental results on multiple datasets show that, by using the attention mechanism and the multi-scale feature fusion strategy, MAFNet can extract more detailed information and reduce the impact of scale changes and occlusions.
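The recalibration step can be illustrated with a compact sketch. Below is a minimal PyTorch spatial attention block of the common pool-and-convolve form; the kernel size and layout are assumptions, as the paper's exact module may differ.

```python
import torch
from torch import nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool along the channel axis, then learn a per-pixel weight map.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn  # recalibrated multi-scale fusion features
```

The crowd count is then the pixel-wise integral of the estimated density map, i.e. `count = density.sum()` in this notation.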
Modeling and solving of high-dimensional multi-objective adaptive allocation for emergency relief supplies
YAN Huajian, ZHANG Guofu, SU Zhaopin, LIU Yang
Journal of Computer Applications    2020, 40 (8): 2410-2419.   DOI: 10.11772/j.issn.1001-9081.2020010045
To seek a good balance between efficiency and fairness in emergency relief supply allocation, a high-dimensional multi-objective adaptive allocation algorithm based on two-dimensional integer encoding was developed. First of all, a high-dimensional multi-objective optimization model was constructed, considering the total emergency response time, the panic degree of the victims, the unsatisfied degree of relief supplies, the fairness of supply allocation, the loss of the victims, and the total cost of emergency response. Then, two-dimensional integer encoding and Adaptive Individual Repair (AIR) were adopted to resolve potential emergency resource conflicts. Finally, shift-based density estimation and the Strength Pareto Evolutionary Algorithm 2 (SPEA2) were introduced to design a high-dimensional multi-objective allocation algorithm for disaster relief supplies. Simulation results show that, compared with the Encoding Repair and Non-dominated Sorting based Differential Evolution algorithm (ERNS-DE) and the Greedy-Search-based Multi-Objective Genetic Algorithm (GSMOGA), the proposed algorithm increases the coverage values by 34.87% and 100% in the first emergency environment, and by 23.59% and 100% in the second, respectively. Moreover, the hypervolume values of the proposed algorithm are much higher than those of the two comparison algorithms. Experimental results verify that the proposed model and algorithm allow decision makers to select emergency schemes according to actual emergency needs, with better flexibility and efficiency.
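The encoding and repair steps can be pictured with a small sketch. The Python example below shows one plausible reading of a two-dimensional integer chromosome with an adaptive repair pass; the row/column semantics and the repair rule are illustrative assumptions, not the paper's exact AIR operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_allocation(stock, demand):
    """stock[i]: available units of supply i; demand[i, j]: need of type i
    at demand point j. The chromosome is a 2-D integer matrix alloc[i, j]."""
    alloc = rng.integers(0, demand + 1)   # random 2-D integer chromosome
    return repair(alloc, stock)

def repair(alloc, stock):
    """Adaptive repair: scale a supply row down if it exceeds the stock."""
    for i, cap in enumerate(stock):
        total = alloc[i].sum()
        if total > cap:                   # conflict: supply over-allocated
            alloc[i] = np.floor(alloc[i] * cap / total).astype(int)
    return alloc

stock = np.array([100, 80])
demand = np.array([[40, 60, 50], [30, 40, 40]])
print(random_allocation(stock, demand))
```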
High order TV image reconstruction algorithm based on Chambolle-Pock algorithm framework
XI Yarui, QIAO Zhiwei, WEN Jing, ZHANG Yanjiao, YANG Wenjing, YAN Huiwen
Journal of Computer Applications    2020, 40 (6): 1793-1798.   DOI: 10.11772/j.issn.1001-9081.2019111955
The traditional Total Variation (TV) minimization algorithm is a classical iterative reconstruction algorithm based on Compressed Sensing (CS) and can accurately reconstruct images from sparse and noisy data. However, it may introduce block artifacts when reconstructing images without obvious piecewise-constant features. Research shows that using High Order Total Variation (HOTV) in image denoising can effectively suppress the block artifacts introduced by the TV model. Therefore, a HOTV image reconstruction model and its Chambolle-Pock (CP) solving algorithm were proposed. Specifically, the second-order TV norm was constructed by using the second-order gradient, a data-fidelity-constrained second-order TV minimization model was designed, and the corresponding CP algorithm was derived. Image reconstruction experiments with qualitative and quantitative analysis were performed on the Shepp-Logan phantom in a wave background, a grayscale gradient phantom and a real CT phantom, under both ideal and noisy projection data. The reconstruction results for ideal projection data show that, compared with the traditional TV algorithm, the HOTV algorithm can effectively suppress block artifacts and improve reconstruction accuracy. The reconstruction results for noisy projection data show that both algorithms have good denoising effects, but the HOTV algorithm preserves image edge information better and has higher anti-noise performance. The HOTV algorithm is thus a better reconstruction algorithm than the TV algorithm for images with obvious grayscale fluctuation rather than obvious piecewise-constant features, and it can be extended to CT reconstruction under different scanning modes and to other imaging modalities.
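The second-order TV term itself is easy to write down. Below is a minimal NumPy sketch of a textbook discrete HOTV seminorm built from second differences; the paper's exact discretization may differ.

```python
import numpy as np

def hotv(u: np.ndarray) -> float:
    """Discrete second-order TV seminorm: pointwise norm of the Hessian
    entries (u_xx, u_xy, u_yy), cropped to a common interior region."""
    uxx = np.diff(u, n=2, axis=0)[:, :-2]                 # d^2u/dx^2
    uyy = np.diff(u, n=2, axis=1)[:-2, :]                 # d^2u/dy^2
    uxy = np.diff(np.diff(u, axis=0), axis=1)[:-1, :-1]   # mixed difference
    return float(np.sqrt(uxx**2 + 2 * uxy**2 + uyy**2).sum())

# The reconstruction model minimizes hotv(u) subject to a data fidelity
# constraint ||Au - b|| <= eps, solved by the Chambolle-Pock primal-dual
# iteration.
```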
Target tracking algorithm based on kernelized correlation filter with block-based model
XU Xiaochao, YAN Hua
Journal of Computer Applications    2020, 40 (3): 683-688.   DOI: 10.11772/j.issn.1001-9081.2019071173
To reduce the influence of factors such as illumination variation, scale variation and partial occlusion in target tracking, a target tracking algorithm based on a Kernelized Correlation Filter (KCF) with a block-based model was proposed. Firstly, Histogram of Oriented Gradients (HOG) features and Color Name (CN) features were combined to better characterize the target. Secondly, a scale pyramid was adopted to estimate the target scale. Finally, the peak-to-sidelobe ratio of the feature response map was used to detect occlusion, and the partial occlusion problem was addressed by introducing a high-confidence block relocation module and a dynamic strategy for adaptive model updating. To verify the effectiveness of the proposed algorithm, comparative experiments with several mainstream algorithms were conducted on various datasets. Experimental results show that the proposed algorithm has the highest precision and success rate, which are 11.89% and 15.24% higher than those of the KCF algorithm respectively, indicating stronger robustness to illumination variation, scale variation and partial occlusion.
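The occlusion test can be stated concretely. Below is a minimal NumPy sketch of the peak-to-sidelobe ratio of a correlation response map; the size of the excluded peak window is a common convention, not necessarily the paper's value.

```python
import numpy as np

def psr(response: np.ndarray, excl: int = 5) -> float:
    """Peak-to-sidelobe ratio: peak height relative to the mean and std
    of the response outside a small window around the peak."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(py - excl, 0):py + excl + 1, max(px - excl, 0):px + excl + 1] = False
    sidelobe = response[mask]
    return float((peak - sidelobe.mean()) / (sidelobe.std() + 1e-12))

# A low PSR signals occlusion: the tracker then invokes the high-confidence
# block relocation module and suspends model updates.
```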
Garbage collection algorithm for NAND flash memory based on logical region heat
LEI Bingbing, YAN Hua
Journal of Computer Applications    2017, 37 (4): 1149-1152.   DOI: 10.11772/j.issn.1001-9081.2017.04.1149
To solve the problems of low collection performance, poor wear-leveling effect and high memory overhead in existing NAND flash memory garbage collection algorithms, a new garbage collection algorithm based on logical region heat was proposed. The heat calculation formula was redefined, and a contiguous range of logical addresses in NAND memory was treated as one heat region whose heat replaced the per-logical-page heat; data of different heat levels were then written into flash blocks with correspondingly different erase counts. In this way, cold and hot data were effectively separated and memory space was saved. Meanwhile, a new collection cost function was constructed to improve the collection efficiency and the wear-leveling effect. Experimental results show that, compared with the well-performing File-aware Garbage Collection (FaGC) algorithm, the proposed algorithm reduces the total number of erase operations by 11%, the total number of copy operations by 13%, the maximum difference of erase counts by 42%, and the memory consumption by 75%. Therefore, the proposed algorithm can increase the available flash memory space, improve the read and write performance of flash memory, and extend the flash memory lifetime.
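One way to picture a collection cost function is a small victim-selection sketch. The weighting below is purely illustrative and does not reproduce the paper's formula; it only shows how region heat and erase counts can jointly drive the choice of victim block.

```python
from dataclasses import dataclass

@dataclass
class Block:
    valid_pages: int      # pages that must be copied out before erasing
    erase_count: int      # wear level of this physical block
    region_heat: float    # heat of the logical region stored here

def collection_cost(b: Block, pages_per_block: int = 64) -> float:
    copy_cost = b.valid_pages / pages_per_block  # fewer live pages: cheaper
    # Prefer cold regions (hot data will soon self-invalidate anyway) and
    # lightly worn blocks; the coefficients are illustrative assumptions.
    return copy_cost + 0.01 * b.erase_count + 0.1 * b.region_heat

def pick_victim(blocks: list[Block]) -> Block:
    """Reclaim the block with the lowest collection cost."""
    return min(blocks, key=collection_cost)
```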
Wear-leveling algorithm for NAND flash memory based on separation of hot and cold logic pages
WANG Jinyang, YAN Hua
Journal of Computer Applications    2016, 36 (5): 1430-1433.   DOI: 10.11772/j.issn.1001-9081.2016.05.1430
To address the problems of existing garbage collection algorithms for NAND flash memory, an efficient algorithm called AWGC (Age With Garbage Collection) was presented to improve the wear leveling of NAND flash memory. A hybrid policy combining the age of invalid pages, the erase counts of physical blocks and the update frequencies of physical blocks was used to select the block to be reclaimed. Meanwhile, a new heat calculation method for logical pages was derived, and cold-hot separation of the valid pages in the reclaimed block was conducted. Compared with the GReedy (GR), Cost-Benefit (CB), Cost-Age-Time (CAT) and File-aware Garbage Collection (FaGC) algorithms, AWGC not only achieves good wear-leveling results but also significantly reduces the total numbers of erase and copy operations.
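The victim scoring in this style of algorithm can be sketched briefly. The combination below is an illustrative assumption, not AWGC's exact definition; it only shows how invalid-page age, erase count and update frequency can be mixed into one score.

```python
import time

def awgc_score(invalid_page_ages, erase_count, update_freq, now=None):
    """Higher score = better victim: old invalid data in a lightly worn,
    rarely updated block is the cheapest and safest to reclaim."""
    now = time.monotonic() if now is None else now
    age = sum(now - t for t in invalid_page_ages)
    return age / (1 + erase_count) / (1 + update_freq)

# Valid pages of the chosen block are then classified hot or cold by their
# update frequency and migrated to blocks with low or high erase counts
# respectively.
```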
Petrol-oil and lubricants support model based on multiple time windows
YAN Hua, GAO Li, LIU Guoyong, WANG Hongqi
Journal of Computer Applications    2015, 35 (7): 2096-2100.   DOI: 10.11772/j.issn.1001-9081.2015.07.2096

In this paper, the military Petrol-Oil and Lubricants (POL) allotment and transportation problem was studied by introducing the concept of a support time window. Considering the complicated restrictions on POL support time and transportation capability, a POL allotment and transportation model based on multiple time windows was proposed using a Constraint Satisfaction Problem (CSP) modelling approach. Firstly, a formalized description of the problem elements was presented, covering POL support stations, demand units, support time windows, support demands and support tasks. Based on this formalized description, the CSP model for POL support was constructed, and the multi-objective model was transformed into a single-objective one by the ideal point method. Finally, a solving procedure based on the Particle Swarm Optimization (PSO) algorithm was designed, and a numerical example was given to demonstrate the application of the method. In the example, the optimization scheme obtained by the proposed model was compared with the one obtained by a model whose only objective is to maximize the supported quantity. In both schemes the transportation capacity reached maximum utilization, but the start time of support for each POL demand in the proposed scheme was no later than that in the single-objective scheme. The comparison of different optimization schemes shows that the proposed model and algorithm can effectively solve the multi-objective POL support optimization problem.
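The solver stage can be outlined generically. Below is a minimal textbook PSO loop in Python; the POL-specific objective, constraint penalties and time-window handling are abstracted into a user-supplied fitness function, so all names and parameters here are assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(fitness, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 1.0)):
    """Generic particle swarm minimizer over a box-bounded search space."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(fitness, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(fitness, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

# Here `fitness` would encode the single-objective value produced by the
# ideal point transformation, plus penalties for violated time windows.
```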

Node behavior and identity-based trusted authentication in wireless sensor networks
LIU Tao, XIONG Yan, HUANG Wenchao, LU Qiwei, GONG Xudong
Journal of Computer Applications    2013, 33 (07): 1842-1845.   DOI: 10.11772/j.issn.1001-9081.2013.07.1842
Concerning the vulnerability of Wireless Sensor Networks (WSNs) to attacks from external and internal nodes and to node failure, due to their openness and limited resources, an efficient and secure trusted authentication scheme was proposed. Identity-based cryptography and bilinear pairings were adopted for authentication key agreement and update. The node trust value was computed through node behavior reputation management based on the Beta distribution. A symmetric cryptosystem combined with message authentication codes was used in the authentication process between trusted nodes, which were identified by their trust values. The scheme not only prevents external attacks such as eavesdropping, injection, replay and denial of service, but also withstands internal threats such as selective forwarding, Wormhole attacks, Sinkhole attacks and Sybil attacks. Analysis and comparison with the SPINS scheme show that, in the same network environment, the proposed scheme achieves longer network lifetime, smaller authentication delay, greater security and better scalability. The scheme has good application value in unattended WSNs with high safety requirements.
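The Beta-reputation step has a standard closed form. Assuming the usual formulation, in which a node observed to behave well a times and badly b times gets the posterior mean of Beta(a+1, b+1) as its trust value (the paper's exact parameterization may differ):

```python
def beta_trust(good: int, bad: int) -> float:
    """Expected trust value under a Beta(good+1, bad+1) reputation model."""
    return (good + 1) / (good + bad + 2)

# Example: 8 successful forwardings and 2 dropped packets give trust 0.75;
# nodes whose trust falls below a threshold are excluded from authentication.
print(beta_trust(8, 2))
```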
Self-localization algorithm for sensor networks using SVM classification region
Ming LIU, Ting-ting WANG, Xiao-yan HUANG, Rui LIU
Journal of Computer Applications   
Focused on the low-cost and low-power requirements of Wireless Sensor Networks (WSNs), this paper proposed a range-free localization algorithm based on Support Vector Machine (SVM) classification regions. First, an SVM was used to construct a binary decision tree classifier by learning from the training data. Then the classifier determined the classification region in which an unknown node was located. Finally, the center point of that region was used as the estimated position of the unknown node. The proposed algorithm requires only connectivity information (i.e., hop counts), which reduces network cost and communication load. The simulation results show that the algorithm significantly alleviates the coverage-hole and border problems while ensuring a certain localization accuracy.
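The region-classification idea can be shown with a toy sketch. The example below trains two scikit-learn SVM classifiers on synthetic hop-count vectors to pick a quadrant and returns the quadrant center; the synthetic data, the two-split depth and all names are placeholders, not the paper's full decision-tree construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
field = 100.0
anchors = rng.uniform(0, field, (8, 2))              # 8 anchor nodes
pos = rng.uniform(0, field, (200, 2))                # training node positions
# Hop counts approximated as quantized distances to each anchor.
hops = np.ceil(np.linalg.norm(pos[:, None] - anchors, axis=2) / 15.0)

clf_x = SVC(kernel="rbf").fit(hops, pos[:, 0] > field / 2)  # left/right split
clf_y = SVC(kernel="rbf").fit(hops, pos[:, 1] > field / 2)  # bottom/top split

def localize(h):
    """Classify an unknown node into a quadrant and return its center."""
    cx = 0.75 * field if clf_x.predict([h])[0] else 0.25 * field
    cy = 0.75 * field if clf_y.predict([h])[0] else 0.25 * field
    return cx, cy

print(localize(hops[0]))
```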